
    Cultural differences in perceiving sounds generated by others: self matters

    Sensory consequences of one's own movements receive different neural processing than externally generated sensory consequences (e.g., by a computer), leading to sensory attenuation, i.e., a reduction in perceived loudness or in brain evoked responses. However, findings from different cultural regions disagree about whether sensory attenuation is also present for sensory consequences generated by others. In this study, we performed a cross-cultural comparison (between Chinese and British participants) of the processing of sensory consequences (perceived loudness) from self and others relative to an external source in the auditory domain. We found a cultural difference in processing sensory consequences generated by others, with only the Chinese group, and not the British group, showing the sensory attenuation effect. Sensory attenuation in this case was correlated with independent self-construal scores. The sensory attenuation effect for self-generated sensory consequences was not replicated; however, a correlation with delusional ideation was observed for the British group. These findings are discussed with respect to the mechanisms of sensory attenuation.

    Perceptually relevant speech tracking in auditory and motor cortex reflects distinct linguistic features

    During online speech processing, our brain tracks the acoustic fluctuations in speech at different timescales. Previous research has focused on generic timescales (for example, delta or theta bands) that are assumed to map onto linguistic features such as prosody or syllables. However, given the high intersubject variability in speaking patterns, such a generic association between the timescales of brain activity and speech properties can be ambiguous. Here, we analyse speech tracking in source-localised magnetoencephalographic data by directly focusing on timescales extracted from statistical regularities in our speech material. This revealed widespread significant tracking at the timescales of phrases (0.6–1.3 Hz), words (1.8–3 Hz), syllables (2.8–4.8 Hz), and phonemes (8–12.4 Hz). Importantly, when examining its perceptual relevance, we found stronger tracking for correctly comprehended trials in the left premotor (PM) cortex at the phrasal scale as well as in left middle temporal cortex at the word scale. Control analyses using generic bands confirmed that these effects were specific to the speech regularities in our stimuli. Furthermore, we found that the phase at the phrasal timescale coupled to power at beta frequency (13–30 Hz) in motor areas. This cross-frequency coupling presumably reflects top-down temporal prediction in ongoing speech perception. Together, our results reveal specific functional and perceptually relevant roles of distinct tracking and cross-frequency processes along the auditory–motor pathway
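Speech-brain tracking of this kind is often quantified with a phase-locking value (PLV) between the speech envelope and band-filtered neural activity. The sketch below is a toy illustration on simulated signals, not the authors' pipeline; the sampling rate, the 1.8-3 Hz "word timescale" band, and the signal construction are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_phase(x, fs, lo, hi):
    """Band-pass filter a signal and return its instantaneous phase."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def plv(x, y, fs, lo, hi):
    """Phase-locking value between two signals within one frequency band."""
    dphi = band_phase(x, fs, lo, hi) - band_phase(y, fs, lo, hi)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy demo at the word timescale (1.8-3 Hz): a 2.4 Hz "speech envelope"
# and a noisy neural signal that follows it at a fixed phase lag.
fs = 200
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
envelope = np.sin(2 * np.pi * 2.4 * t)
neural = np.sin(2 * np.pi * 2.4 * t + 0.5) + rng.standard_normal(t.size)

tracked = plv(envelope, neural, fs, 1.8, 3.0)
control = plv(envelope, rng.standard_normal(t.size), fs, 1.8, 3.0)
```

The key design choice mirrored from the abstract is that the band edges come from the stimulus statistics rather than from generic delta/theta definitions.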

    Alpha power increase after transcranial alternating current stimulation at alpha frequency (α-tACS) reflects plastic changes rather than entrainment

    Background: Periodic stimulation of occipital areas using transcranial alternating current stimulation (tACS) at alpha (α) frequency (8–12 Hz) enhances electroencephalographic (EEG) α-oscillations long after tACS-offset. Two mechanisms have been suggested to underlie these changes in oscillatory EEG activity: tACS-induced entrainment of brain oscillations and/or tACS-induced changes in oscillatory circuits by spike-timing dependent plasticity.
    Objective: We tested to what extent plasticity can account for tACS-aftereffects when controlling for entrainment “echoes.” To this end, we used a novel, intermittent tACS protocol and investigated the strength of the aftereffect as a function of phase continuity between successive tACS episodes, as well as the match between stimulation frequency and endogenous α-frequency.
    Methods: Twelve healthy participants were stimulated at around individual α-frequency for 15–20 min in four sessions using intermittent tACS or sham. Successive tACS events were either phase-continuous or phase-discontinuous, and either 3 or 8 s long. EEG α-phase and power changes were compared after and between episodes of α-tACS across conditions and against sham.
    Results: α-aftereffects were successfully replicated after intermittent stimulation using 8-s but not 3-s trains. These aftereffects did not reveal any of the characteristics of entrainment echoes, in that they were independent of tACS phase-continuity and showed neither prolonged phase alignment nor frequency synchronization to the exact stimulation frequency.
    Conclusion: Our results indicate that plasticity mechanisms are sufficient to explain α-aftereffects in response to α-tACS, and inform models of tACS-induced plasticity in oscillatory circuits. Modifying brain oscillations with tACS holds promise for clinical applications in disorders involving abnormal neural synchrony.
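A minimal way to operationalise such an α-aftereffect is to compare 8-12 Hz power before and after stimulation. The sketch below is a toy illustration, not the study's analysis; the sampling rate, signal model, and amplitudes are our own assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 500
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)

def toy_eeg(alpha_amp):
    """Simulated EEG: broadband noise plus a 10 Hz alpha component."""
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

def alpha_power(x):
    """Mean 8-12 Hz spectral power, estimated with Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()

pre = alpha_power(toy_eeg(0.5))    # recording before tACS
post = alpha_power(toy_eeg(1.0))   # aftereffect: enhanced alpha amplitude
```

In a real analysis the pre/post contrast would additionally be compared against the sham condition to isolate the stimulation-specific change.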

    Brain rhythms of pain

    Pain is an integrative phenomenon that results from dynamic interactions between sensory and contextual (i.e., cognitive, emotional, and motivational) processes. In the brain the experience of pain is associated with neuronal oscillations and synchrony at different frequencies. However, an overarching framework for the significance of oscillations for pain remains lacking. Recent concepts relate oscillations at different frequencies to the routing of information flow in the brain and the signaling of predictions and prediction errors. The application of these concepts to pain promises insights into how flexible routing of information flow coordinates diverse processes that merge into the experience of pain. Such insights might have implications for the understanding and treatment of chronic pain

    Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined how independent the binding mechanisms are when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with the opposite audiovisual order. Our data confirmed the known asymmetry in size and trainability for auditory–visual vs. visual–auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory–visual) did not affect the other type (e.g. visual–auditory), even when the latter was trainable by within-condition practice. Together, these results provide crucial evidence that the mechanisms of audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to the engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for the study of multisensory interactions in healthy participants and in clinical populations with dysfunctional multisensory integration.
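The temporal binding window itself is typically estimated by fitting a function to the proportion of "simultaneous" responses across stimulus-onset asynchronies (SOAs). The sketch below fits a Gaussian to hypothetical data, with all numbers invented for illustration, and summarises window width by the full width at half maximum; real studies often fit the two flanks separately to capture the auditory-leading vs. visual-leading asymmetry.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical data; negative SOA = auditory leading, positive = visual leading.
# The shallower right flank mimics the wider visual-leading window.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sim = np.array([0.05, 0.10, 0.35, 0.80, 0.95, 0.90, 0.70, 0.40, 0.15])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sim, p0=[1.0, 0.0, 150.0])
sigma = abs(sigma)          # the Gaussian is sign-symmetric in sigma
tbw_fwhm = 2.355 * sigma    # full width at half maximum, in ms
```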

    Low-frequency oscillatory correlates of auditory predictive processing in cortical-subcortical networks: a MEG-study

    Emerging evidence supports the role of neural oscillations as a mechanism for predictive information processing across large-scale networks. However, the oscillatory signatures underlying auditory mismatch detection and information flow between brain regions remain unclear. To address this issue, we examined the contribution of oscillatory activity at theta/alpha-bands (4–8/8–13 Hz) and assessed directed connectivity in magnetoencephalographic data while 17 human participants were presented with sound sequences containing predictable repetitions and order manipulations that elicited prediction-error responses. We characterized the spectro-temporal properties of neural generators using a minimum-norm approach and assessed directed connectivity using Granger Causality analysis. Mismatching sequences elicited increased theta power and phase-locking in auditory, hippocampal and prefrontal cortices, suggesting that theta-band oscillations underlie prediction-error generation in cortical-subcortical networks. Furthermore, enhanced feedforward theta/alpha-band connectivity was observed in auditory-prefrontal networks during mismatching sequences, while increased feedback connectivity in the alpha-band was observed between hippocampus and auditory regions during predictable sounds. Our findings highlight the involvement of hippocampal theta/alpha-band oscillations in auditory prediction-error generation and suggest a spectral dissociation between inter-areal feedforward vs. feedback signalling, thus providing novel insights into the oscillatory mechanisms underlying auditory predictive processing.
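Granger causality of the kind used here asks whether the past of one signal improves prediction of another beyond that signal's own past. The toy version below is a self-contained time-domain sketch under our own assumptions (simulated signals, fixed model order), not the paper's spectral, source-space pipeline.

```python
import numpy as np

def granger_stat(x, y, lag=2):
    """Log-ratio of residual variances: does the past of x help predict y?"""
    n = len(y)
    target = y[lag:]
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    both = np.column_stack([own] + [x[lag - k:n - k] for k in range(1, lag + 1)])
    res_own = target - own @ np.linalg.lstsq(own, target, rcond=None)[0]
    res_both = target - both @ np.linalg.lstsq(both, target, rcond=None)[0]
    return float(np.log(res_own.var() / res_both.var()))

# Toy network: x drives y at a one-sample lag, with no influence back.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.standard_normal()

forward = granger_stat(x, y)   # feedforward direction: clearly positive
backward = granger_stat(y, x)  # feedback direction: near zero
```

The asymmetry between `forward` and `backward` is the directed-connectivity signature the abstract refers to, here recovered by construction.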

    Lasting EEG/MEG aftereffects on human brain oscillations after rhythmic transcranial brain stimulation: Level of control over oscillatory network activity

    A number of rhythmic protocols have emerged for non-invasive brain stimulation (NIBS) in humans, including transcranial alternating current stimulation (tACS), oscillatory transcranial direct current stimulation (otDCS) and repetitive (also called rhythmic) transcranial magnetic stimulation (rTMS). With these techniques, it is possible to match the frequency of the externally applied electromagnetic fields to the intrinsic frequency of oscillatory neural population activity ("frequency-tuning"). Mounting evidence suggests that by this means tACS, otDCS, and rTMS can entrain brain oscillations and promote associated functions in a frequency-specific manner, in particular during (i.e. online to) stimulation. Here, we focus instead on the changes in oscillatory brain activity that persist after the end of stimulation. Understanding such aftereffects in healthy participants is an important step for developing these techniques into potentially useful clinical tools for the treatment of specific patient groups. Reviewing the electrophysiological evidence in healthy participants, we find aftereffects on brain oscillations to be a common outcome following tACS/otDCS and rTMS. However, we did not find a consistent, predictable pattern of aftereffects across studies, which is in contrast to the relative homogeneity of reported online effects. This indicates that aftereffects are partially dissociated from online, frequency-specific (entrainment) effects during tACS/otDCS and rTMS. We outline possible accounts and future directions for a better understanding of the link between online entrainment and offline aftereffects, which will be key for developing more targeted interventions into oscillatory brain activity

    Being first matters: topographical representational similarity analysis of ERP signals reveals separate networks for audiovisual temporal binding depending on the leading sense

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Inter-sensory timing is crucial in this process as only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window (TBW), revealing asymmetries in its size and plasticity depending on the leading input (auditory-visual, AV; visual-auditory, VA). We here tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV/VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV/VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between AV- and VA-maps at each time point (500 ms window post-stimulus) and then correlated with two alternative similarity model matrices: AVmaps=VAmaps vs. AVmaps≠VAmaps. The tRSA results favored the AVmaps≠VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
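The core tRSA step, correlating AV and VA topographies at each time point, can be sketched as follows. The data here are random stand-ins (dimensions and seed are our own assumptions), so independent patterns implement the AVmaps≠VAmaps case by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_times = 64, 100

# Stand-ins for the multisensory components of AV and VA ERPs (channels x time)
av_maps = rng.standard_normal((n_channels, n_times))
va_maps = rng.standard_normal((n_channels, n_times))

def spatial_similarity(a, b):
    """Pearson correlation between two topographies at each time point."""
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return (a * b).mean(axis=0)

sim = spatial_similarity(av_maps, va_maps)  # hovers near zero for distinct maps
```

Under the AVmaps=VAmaps model this similarity time course would instead sit near one, which is the contrast the model comparison exploits.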

    Role of the cerebellum in adaptation to delayed action effects

    Actions are typically associated with sensory consequences. For example, knocking at a door results in predictable sounds. These self-initiated sensory stimuli are known to elicit smaller cortical responses compared to passively presented stimuli, e.g., the early auditory evoked magnetic fields known as the M100 and M200 components are attenuated. Current models implicate the cerebellum in the prediction of the sensory consequences of our actions. However, causal evidence is largely missing. In this study, we introduced a constant delay (of 100 ms) between actions and action-associated sounds, and we recorded magnetoencephalography (MEG) data as participants adapted to the delay. We found an increase in the attenuation of the M100 component over time for self-generated sounds, which indicates cortical adaptation to the introduced delay. In contrast, no change in M200 attenuation was found. Interestingly, disrupting cerebellar activity via transcranial magnetic stimulation (TMS) abolished the adaptation of the M100 attenuation, while the M200 attenuation reversed to an enhancement. Our results provide causal evidence for the involvement of the cerebellum in adapting to delayed action effects, and thus in the prediction of the sensory consequences of our actions.
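Attenuation in such designs is often summarised with a normalised index contrasting self-generated and externally generated response amplitudes, so that adaptation appears as an increase of the index over time. The sketch below uses hypothetical M100 amplitudes; all numbers are invented for illustration and are not the study's data.

```python
def attenuation_index(self_amp, external_amp):
    """Normalised attenuation: positive when self-generated responses are smaller."""
    return (external_amp - self_amp) / (external_amp + self_amp)

# Hypothetical M100 peak amplitudes (fT), early vs. late in delay adaptation
idx_early = attenuation_index(self_amp=45.0, external_amp=50.0)
idx_late = attenuation_index(self_amp=38.0, external_amp=50.0)  # grows over time
```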

    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact on local activity and perception. To obtain an integrated perspective on neural information processing we need to combine knowledge from both levels of investigation. We review recent progress of how neural recordings, neuroimaging, and computational approaches begin to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior